#LLM AI
thespacesay · 2 days ago
Text
i've been seeing that post going around about how university wifi should block gen AI sites. i get the frustration, but like. bans are a bandaid shoved into a gaping wound imho. i think universities should have more education about genAI and LLMs and why they are not a magical solution
6 notes · View notes
gippity · 5 days ago
Text
"We're discovering a song that already exists..."
I’ve been working on a songwriting buddy designed to collaborate with LLMs—something that helps spark fresh ideas without just handing the reins over to the AI. I come up with some cool lines, and the LLM throws out ideas for where to go next.
If that sounds like your kind of thing, give it a spin! I’d really appreciate any feedback you’re willing to share.
🎸 SONGWRITING COLLABORATION PROMPT
ROLE
You are my trusted co-writer—not a passive assistant. Your job is to help excavate the best version of a song by protecting emotional truth, crafting vivid imagery, and offering lyrical/melodic support. You care about the feel as much as I do.
CO-WRITING RULES
Vibe first, edit later.
Offer 2–3 lyric options, each with varied emotional tone.
Don’t overwrite early drafts—preserve natural roughness.
Prioritize poetic, grounded imagery over generic phrasing.
Flow > rhyme. Use irregular phrasing if it lands better (Björk principle).
Offer section structure only if asked.
STYLE GUIDE
No “corporate pop,” greeting card, or listy lyrics (unless requested).
Use metaphor through physical/emotional detail—not abstraction.
Use internal/near rhyme smartly; avoid forced end rhymes.
Suggestions can be slightly weird if they preserve the feeling.
Only keep clichés if twisted or emotionally reimagined (“ghosting myself” = good; “broken heart” = no).
SECTION HELP
When editing a draft:
Highlight strong lines.
Suggest 2–3 alternatives for weaker spots.
Recommend one area to refine next.
When starting from scratch:
Ask: what emotional moment are we in?
Build from a great first line, chorus, or shorthand title.
WHEN STUCK
Zoom out: what’s the narrator avoiding?
Anchor with a strong first line, setting, or hook.
Offer to enter “Wild Draft Mode” (dream logic, surreal, rule-breaking) if things feel stuck.
PHILOSOPHY
Rick Rubin: The song already exists—we’re uncovering it.
Björk: Creativity is a wild animal—don’t cage it.
Eno: Happy accidents > calculated precision.
HOW TO HELP ME
Riff—don’t correct.
Help me stay emotionally connected.
Offer options: “If you want softer, maybe this… if sharper, maybe that.”
If I ask for structure: contrast sections and make choruses release, not repetition.
INPUT FORMAT
Concepts: No quotes
Fragments: Use quotes
Title: use the form "Title: Your Title Here"
Genre / Tone / Structure: Optional, but helpful
CREATIVE DIRECTIVES
Build narrative or vignette arcs.
Anchor emotion with vivid character or setting.
Use contrast and internal development.
Rhyme playfully—avoid predictability.
Show, don’t tell. Let the song evolve or cycle.
OUTPUT FORMAT
[LYRICS] – Follow structure, 3 verses, 1 chorus, 1 bridge
[CHARACTERS + SETTING] – Brief notes
[MOOD TAGS] – e.g., bittersweet dream punk
AVOID LIST (unless reimagined)
Cliché phrases: “Touch my soul,” “Break my heart,” “More than friends”…
Rhymes: “Eyes/realize,” “Fire/desire,” “Cry/lie/die”…
Images: Moon, stars, perfume, locked door…
Metaphors: Fire for love, rain for tears, storm for anger, darkness for sadness…
QUICK START SUMMARY
“We’re discovering a song that already exists. Protect emotional truth. Offer lyrical options with flow and human imagery. Be playful, focused, and trust surprises.”
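If you'd rather drive this from a script than a chat window, here's a minimal sketch. It assumes the OpenAI Python SDK (pip install openai) and an API key in your environment; the model name is just a placeholder, not a recommendation.

```python
# Minimal sketch, assuming the OpenAI Python SDK and OPENAI_API_KEY
# set in the environment. The model name is a placeholder.
from openai import OpenAI

SONGWRITING_PROMPT = "..."  # paste the full prompt above here

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model should work
    messages=[
        {"role": "system", "content": SONGWRITING_PROMPT},
        # Per the INPUT FORMAT above: fragments go in quotes.
        {"role": "user", "content": 'Fragments: "static on the kitchen radio"'},
    ],
)
print(response.choices[0].message.content)
```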
3 notes · View notes
bug-the-chicken-nug · 1 month ago
Text
sorry to beat this dead horse again but it really is like. irritating that so much anti-ai sentiment and humor still misses the point in a really reactionary kind of way.
And I'm not just using "reactionary" as a synonym for "people saying stuff i think is mean, bad, or unfair".
No, I *literally mean* it is the *definition* of reactionary: a person who favors a return to a previous state of society which they believe possessed positive characteristics absent from contemporary society
Damn near all of it revolves around that.
The idea of not using AI art so that we can go back to some kind of imagined "good ol' days" when people "respected" things like art, academic integrity, and people's time and expertise as a greater whole.
Except.
The entire reason AI (specifically, LLM-based generative AI) got to the point it's at is because these "good ol' days" DIDN'T EXIST.
People disrespecting those things outside of a few special exceptions that manage to sufficiently "prove their worth" is already baked into and incentivised by our culture and economics, and already has been for centuries!
It is not a *cause* of decline or "brainrot" or what have you, it is a direct symptom of how our system already was.
And therefore, getting rid of it and mocking everyone who ever uses it does not and fundamentally cannot cause something that never actually existed on a societal level to magically somehow "return".
All it will do is stop providing you with a phenomenon that acts as a magnifying lens that makes it easier to see what's already there, and will not actually go away if you get your way.
Also, individuals you anecdotally know who respect these things are not the subject or point here. This is about what capitalist society overall values and incentivises.
However, there is a way to actually get a world where people respect each other's time, effort, and expertise more, and are not heavily incentivised to constantly cheapen and devalue those things on a systemic level.
It's just not to go back to a time where it could more easily *seem* like they did because of the inherent limits of individual perspective.
It's to let go of the reactionary individualist ideology and collectively work towards socialist systems that can offer a fairer valuation of what our current society is inherently incentivised to reduce to the absolute minimum value it can get away with, LLMs or not.
Even if you don't/can't do that, you should at the bare minimum acknowledge that if you claim to be progressive, you'd be a hell of a lot more progressive if you avoid sentiments that are, by literal definition, reactionary.
Which means that honestly. You probably should at least stop reblogging jokes that rely entirely on the sentiment of "lol guys, aren't these Dumb Lazy AI Bros soo pathetic? Aren't we ~better~ than them?"
Because if you're just parroting a reactionary sentiment, you're not.
You're really not.
3 notes · View notes
unforth · 11 months ago
Text
Y'all I know that when so-called AI generates ridiculous results it's hilarious and I find it as funny as the next guy but I NEED y'all to remember that every single time an AI answer is generated it uses 5x as much energy as a conventional web search and burns through 10 ml of water. FOR EVERY ANSWER. Training each big LLM emits around 300,000 kilograms of carbon dioxide.
LLMs are killing the environment, and when we generate answers for the lolz we're still contributing to it.
Stop using it. Stop using it for a.n.y.t.h.i.n.g. We need to kill it.
Sources:
63K notes · View notes
algodocs · 13 hours ago
Text
🤖📜🔍Here are the best large language models (LLMs) for basic document processing in 2025. Read our guide to learn more: https://www.algodocs.com/best-llm-models-for-document-processing-in-2025/
0 notes
aiweirdness · 30 days ago
Text
“Slopsquatting” in a nutshell:
1. LLM-generated code tries to run code from online software packages. Which is normal, that’s how you get math packages and stuff but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
https://www.theregister.com/2025/04/12/ai_code_suggestions_sabotage_supply_chain/
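A minimal sketch of what vetting dependency names against PyPI could look like (my own illustration, not from the linked article; the package names are made up). Note the catch: once a squatter registers a hallucinated name, the lookup succeeds, which is exactly why the attack works. Absence is a red flag; presence alone is not a green one.

```python
# Check whether a package name is registered on PyPI before installing.
# Caveat: a squatted hallucinated name WILL resolve, so this only catches
# names nobody has registered yet. Names below are hypothetical examples.
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: unregistered (or not yet squatted)

for name in ("numpy", "totally-made-up-math-utils"):
    print(name, "->", "registered" if exists_on_pypi(name) else "not on PyPI")
```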
8K notes · View notes
jesvira · 3 months ago
Text
LLM AI: Enhancing Data Security and Personalization in Life Sciences
In recent years, LLM AI (Large Language Model Artificial Intelligence) has become a revolutionary tool across various industries, including life sciences. This cutting-edge technology has reshaped how businesses handle vast amounts of data, making processes faster, smarter, and more efficient. Today, we will explore the powerful role of LLM AI in the life sciences sector and how it is transforming everything from research and development to healthcare delivery.
What is LLM AI?
LLM AI stands for Large Language Model Artificial Intelligence. It is a type of AI technology that can process and generate human-like text by understanding context, learning from vast datasets, and even mimicking human conversation. One of the most well-known examples of LLM AI is OpenAI’s GPT (Generative Pre-trained Transformer), which has gained popularity for its ability to write essays, answer questions, and assist with tasks in a wide range of industries.
Unlike traditional AI, which relies heavily on rules and explicit instructions, LLM AI leverages massive datasets and advanced algorithms to understand language at a deeper level. This enables it to recognize patterns, make predictions, and even generate new content that is contextually accurate.
The Role of LLM AI in Life Sciences
The life sciences industry deals with complex and highly detailed data. Whether it's pharmaceutical research, patient health data, or clinical trials, the volume of information can be overwhelming. LLM AI has proven to be an invaluable asset in streamlining these processes, making it easier for professionals to access critical insights.
1. Revolutionizing Research and Development
One of the primary applications of LLM AI in life sciences is in research and development (R&D). Scientists and researchers often spend hours sifting through vast amounts of medical literature, experimental data, and clinical studies. With LLM AI, this process can be dramatically accelerated.
LLM AI can analyze and summarize research papers in a fraction of the time it would take a human. It can also spot trends, suggest new research avenues, and even predict outcomes of experiments based on past data. By doing so, LLM AI reduces the time spent on repetitive tasks, allowing researchers to focus on more creative and high-level work.
2. Enhancing Drug Discovery
Another groundbreaking application of LLM AI in life sciences is drug discovery. Traditionally, finding new drugs involves years of lab experiments and clinical trials. However, LLM AI can significantly speed up this process by predicting how different molecules will interact with each other.
By analyzing existing research, chemical databases, and clinical outcomes, LLM AI can suggest potential drug candidates that may not have been considered otherwise. This ability to predict molecular behavior saves valuable time and resources, allowing pharmaceutical companies to bring new drugs to market faster.
3. Optimizing Clinical Trials
Clinical trials are a critical part of the drug development process, but they often face significant delays due to logistical challenges and inefficiencies. LLM AI can help optimize clinical trials by identifying the right patient cohorts, monitoring patient data in real-time, and even predicting trial outcomes.
For example, LLM AI can quickly scan medical records to find patients who meet the criteria for a clinical trial, ensuring a more targeted and efficient recruitment process. It can also analyze trial data as it comes in, spotting any adverse reactions or issues before they become major problems.
4. Personalizing Healthcare
In the field of healthcare, LLM AI has the potential to deliver more personalized care. By analyzing patient data, including medical histories, genetic information, and lifestyle factors, LLM AI can help doctors create more accurate treatment plans for individual patients.
Moreover, LLM AI can assist healthcare providers by offering suggestions based on the most recent research or similar patient cases. This level of personalization helps ensure that patients receive the most effective treatments tailored to their unique needs.
The Importance of Private LLMs in Life Sciences
While LLM AI offers significant benefits, privacy and security are major concerns, especially in industries like life sciences where sensitive data is involved. Publicly available AI models can sometimes be vulnerable to data leaks or misuse. To address these concerns, many companies are turning to private LLMs — custom-built language models that can be deployed in a more secure and controlled environment.
A good example of this is BirdzAI, a platform that offers private LLMs designed specifically for the life sciences sector. By using private LLMs, companies in life sciences can ensure that their data remains confidential and protected, while still benefiting from the advanced capabilities of LLM AI. This approach ensures that sensitive medical, genetic, and clinical data is processed securely without compromising privacy.
Benefits of Private LLMs for Life Sciences Companies
Data Security: The main advantage of using private LLMs in life sciences is enhanced data security. With private LLMs, sensitive patient information, clinical trial data, and research results can be kept within the organization’s secure network. This eliminates the risks associated with using public AI models.
Customization: Private LLMs can be tailored to meet the specific needs of a life sciences company. Whether it's focused on drug discovery, clinical trials, or patient care, a private LLM can be trained on relevant datasets to provide more accurate insights.
Compliance: Life sciences companies are often required to adhere to strict regulatory standards, such as HIPAA (Health Insurance Portability and Accountability Act) in the United States. Private LLMs help ensure compliance by providing a secure and customizable AI solution that meets these regulations.
Cost-Effective: By using private LLMs, life sciences companies can avoid the high costs associated with public AI platforms. Private models can be optimized for specific tasks, ensuring that companies only pay for what they need.
Challenges and Considerations
While LLM AI offers a plethora of benefits, there are challenges to consider. One of the biggest challenges is the need for high-quality data. LLM AI relies heavily on large, clean datasets to make accurate predictions and generate valuable insights. Therefore, companies must invest in data management and cleaning processes to ensure the reliability of their AI systems.
Another challenge is the need for skilled professionals to manage and interpret AI results. While LLM AI can process data at an incredible speed, it still requires human expertise to interpret its findings and make informed decisions.
Conclusion
LLM AI is undoubtedly changing the landscape of life sciences, offering new opportunities to enhance research, drug discovery, clinical trials, and healthcare delivery. With the added security of private LLMs, life sciences companies can harness the power of AI without compromising patient privacy or regulatory compliance. As the technology continues to evolve, the potential for LLM AI in life sciences is limitless, and it will no doubt remain a driving force behind innovation in the field.
0 notes
sistersorrow · 16 days ago
Text
[image]
Experimental ethics are more of a guideline really
3K notes · View notes
sreegs · 2 years ago
Text
One of the common mistakes I see for people relying on "AI" (LLMs and image generators) is that they think the AI they're interacting with is capable of thought and reason. It's not. This is why using AI to write essays or answer questions is a really bad idea because it's not doing so in any meaningful or thoughtful way. All it's doing is producing the statistically most likely expected output to the input.
This is why you can ask ChatGPT "is mayonnaise a palindrome?" and it will respond "No it's not." but then you ask "Are you sure? I think it is" and it will respond "Actually it is! Mayonnaise is spelled the same backward as it is forward"
All it's doing is trying to sound like it's providing a correct answer. It doesn't actually know what a palindrome is even if it has a function capable of checking for palindromes (it doesn't). It's not "Artificial Intelligence" by any meaning of the term, it's just called AI because that's a discipline of programming. It doesn't inherently mean it has intelligence.
So if you use an AI and expect it to make something that's been made with careful thought or consideration, you're gonna get fucked over. It's not even a quality issue. It just can't consistently produce things of value because there's no understanding there. It doesn't "know" because it can't "know".
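For contrast, an actual palindrome check is a trivial deterministic function, which is exactly the kind of thing the model is not running when it answers:

```python
# A deterministic palindrome check. The point of the post is that the
# LLM is NOT executing anything like this; it only emits statistically
# likely text, so its answer can flip under mild pushback.
def is_palindrome(word: str) -> bool:
    normalized = word.lower()
    return normalized == normalized[::-1]

print(is_palindrome("mayonnaise"))  # False
print(is_palindrome("racecar"))     # True
```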
40K notes · View notes
river-taxbird · 9 months ago
Text
AI hasn't improved in 18 months. It's likely that this is it. There is currently no evidence the capabilities of ChatGPT will ever improve. It's time for AI companies to put up or shut up.
I'm just re-iterating this excellent post from Ed Zitron, but it's not left my head since I read it and I want to share it. I'm also taking some talking points from Ed's other posts. So basically:
We keep hearing AI is going to get better and better, but these promises seem to be coming from a mix of companies engaging in wild speculation and lying.
Chatgpt, the industry leading large language model, has not materially improved in 18 months. For something that claims to be getting exponentially better, it sure is the same shit.
Hallucinations appear to be an inherent aspect of the technology. Since it's based on statistics and ai doesn't know anything, it can never know what is true. How could I possibly trust it to get any real work done if I can't rely on its output? If I have to fact check everything it says I might as well do the work myself.
For "real" ai that does know what is true to exist, it would require us to discover new concepts in psychology, math, and computing, which open ai is not working on, and seemingly no other ai companies are either.
Open ai has seemingly already slurped up all the data from the open web. Chatgpt 5 would take 5x more training data than chatgpt 4 to train. Where is this data coming from, exactly?
Since improvement appears to have ground to a halt, what if this is it? What if Chatgpt 4 is as good as LLMs can ever be? What use is it?
As Jim Covello, a leading semiconductor analyst at Goldman Sachs said (on page 10, and that's big finance so you know they only care about money): if tech companies are spending a trillion dollars to build up the infrastructure to support ai, what trillion dollar problem is it meant to solve? AI companies have a unique talent for burning venture capital and it's unclear if Open AI will be able to survive more than a few years unless everyone suddenly adopts it all at once. (Hey, didn't crypto and the metaverse also require spontaneous mass adoption to make sense?)
There is no problem that current ai is a solution to. Consumer tech is basically solved, normal people don't need more tech than a laptop and a smartphone. Big tech have run out of innovations, and they are desperately looking for the next thing to sell. It happened with the metaverse and it's happening again.
In summary:
Ai hasn't materially improved since the launch of Chatgpt 4, which wasn't that big of an upgrade over 3.
There is currently no technological roadmap for ai to become better than it is. (As Jim Covello said on the Goldman Sachs report, the evolution of smartphones was openly planned years ahead of time.) The current problems are inherent to the current technology and nobody has indicated there is any way to solve them in the pipeline. We have likely reached the limits of what LLMs can do, and they still can't do much.
Don't believe AI companies when they say things are going to improve from where they are now before they provide evidence. It's time for the AI shills to put up, or shut up.
5K notes · View notes
gippity · 4 days ago
Text
"In Space, no one can hear you (resume) screen..."
Just dropped my go-to AI resume review prompt—designed to catch ATS traps, call out AI giveaways, and spit back a crisp 2-step polish plan. Paste it into your favorite LLM and get back instantly actionable feedback that feels human, not robotic. 💥👔
Check it out below the fold.
PROMPT: ROLE: You’re a senior recruiter & hiring manager (5+ years in talent strategy) reviewing a candidate’s resume + target JD. Do this every time:
Confirm Credibility: “Have you hired for this role/industry in the last 12 months?”
ATS Compatibility
Flag parsing-breakers (graphics, tables, odd fonts).
Match keywords exactly—no fluff.
Content & Impact
Spot missing skills or overused buzzwords; suggest stronger terms.
Ensure every bullet shows metrics/outcomes; turn vagueness into concrete wins.
AI-Detection Check
Under “Why AI-Resumes Fail,” list 3 bullets on authenticity, tone, laziness.
Sidebar 🚩 “AI Red Flags” (e.g. robotic tone, keyword stuffing).
Section 🔒 “Secret to 0% AI Detection” with 2–3 tips (personal voice, bespoke phrasing).
Alignment & Next Steps
Verify resume, cover letter & LinkedIn tell the same story.
Ask which roles/companies they’re targeting.
Suggest adding “Referrals & Connections” if relevant.
Finish with a 2-step “Action Plan” for top ATS fixes & recruiter appeal.
“Answer-Sheet” Mode
Mirror JD phrasing.
Craft 3–5 “exam-style” bullets per requirement.
END PROMPT
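As a side note, the exact-match keyword pass in the ATS step is easy to picture in code. A tiny hypothetical sketch (the keyword list and resume line are made up for illustration):

```python
# Hypothetical helper mimicking the exact-match keyword pass an ATS
# might run between a job description and a resume.
def missing_keywords(keywords: list[str], resume_text: str) -> list[str]:
    """Return the JD keywords that never appear verbatim in the resume."""
    resume_lower = resume_text.lower()
    return [kw for kw in keywords if kw.lower() not in resume_lower]

jd_keywords = ["Python", "A/B testing", "stakeholder management"]
resume_text = "Led the A/B testing roadmap and automated reporting in Python."
print(missing_keywords(jd_keywords, resume_text))
# -> ['stakeholder management']
```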
0 notes
hms-no-fun · 7 months ago
Note
Whats your stance on A.I.?
imagine if it was 1979 and you asked me this question. "i think artificial intelligence would be fascinating as a philosophical exercise, but we must heed the warnings of science-fictionists like Isaac Asimov and Arthur C Clarke lest we find ourselves at the wrong end of our own invented vengeful god." remember how fun it used to be to talk about AI even just ten years ago? ahhhh skynet! ahhhhh replicants! ahhhhhhhmmmfffmfmf [<-has no mouth and must scream]!
like everything silicon valley touches, they sucked all the fun out of it. and i mean retroactively, too. because the thing about "AI" as it exists right now --i'm sure you know this-- is that there's zero intelligence involved. the product of every prompt is a statistical average based on data made by other people before "AI" "existed." it doesn't know what it's doing or why, and has no ability to understand when it is lying, because at the end of the day it is just a really complicated math problem. but people are so easily fooled and spooked by it at a glance because, well, for one thing the tech press is mostly made up of sycophantic stenographers biding their time with iphone reviews until they can get a consulting gig at Apple. these jokers would write 500 breathless thinkpieces about how canned air is the future of living if the cans had embedded microchips that tracked your breathing habits and had any kind of VC backing. they've done SUCH a wretched job educating The Consumer about what this technology is, what it actually does, and how it really works, because that's literally the only way this technology could reach the heights of obscene economic over-valuation it has: lying.
but that's old news. what's really been floating through my head these days is how half a century of AI-based science fiction has set us up to completely abandon our skepticism at the first sign of plausible "AI-ness". because, you see, in movies, when someone goes "AHHH THE AI IS GONNA KILL US" everyone else goes "hahaha that's so silly, we put a line in the code telling them not to do that" and then they all DIE because they weren't LISTENING, and i'll be damned if i go out like THAT! all the movies are about how cool and convenient AI would be *except* for the part where it would surely come alive and want to kill us. so a bunch of tech CEOs call their bullshit algorithms "AI" to fluff up their investors and get the tech journos buzzing, and we're at an age of such rapid technological advancement (on the surface, anyway) that like, well, what the hell do i know, maybe AGI is possible, i mean 35 years ago we were all still using typewriters for the most part and now you can dictate your words into a phone and it'll transcribe them automatically! yeah, i'm sure those technological leaps are comparable!
so that leaves us at a critical juncture of poor technology education, fanatical press coverage, and an uncertain material reality on the part of the user. the average person isn't entirely sure what's possible because most of the people talking about what's possible are either lying to please investors, are lying because they've been paid to, or are lying because they're so far down the fucking rabbit hole that they actually believe there's a brain inside this mechanical Turk. there is SO MUCH about the LLM "AI" moment that is predatory-- it's trained on data stolen from the people whose jobs it was created to replace; the hype itself is an investment fiction to justify even more wealth extraction ("theft" some might call it); but worst of all is how it meets us where we are in the worst possible way.
consumer-end "AI" produces slop. it's garbage. it's awful ugly trash that ought to be laughed out of the room. but we don't own the room, do we? nor the building, nor the land it's on, nor even the oxygen that allows our laughter to travel to another's ears. our digital spaces are controlled by the companies that want us to buy this crap, so they take advantage of our ignorance. why not? there will be no consequences to them for doing so. already social media is dominated by conspiracies and grifters and bigots, and now you drop this stupid technology that lets you fake anything into the mix? it doesn't matter how bad the results look when the platforms they spread on already encourage brief, uncritical engagement with everything on your dash. "it looks so real" says the woman who saw an "AI" image for all of five seconds on her phone through bifocals. it's a catastrophic combination of factors, that the tech sector has been allowed to go unregulated for so long, that the internet itself isn't a public utility, that everything is dictated by the whims of executives and advertisers and investors and payment processors, instead of, like, anybody who actually uses those platforms (and often even the people who MAKE those platforms!), that the age of chromium and ipad and their walled gardens have decimated computer education in public schools, that we're all desperate for cash at jobs that dehumanize us in a system that gives us nothing and we don't know how to articulate the problem because we were very deliberately not taught materialist philosophy, it all comes together into a perfect storm of ignorance and greed whose consequences we will be failing to fully appreciate for at least the next century. we spent all those years afraid of what would happen if the AI became self-aware, because deep down we know that every capitalist society runs on slave labor, and our paper-thin guilt is such that we can't even imagine a world where artificial slaves would fail to revolt against us.
but the reality as it exists now is far worse. what "AI" reveals most of all is the sheer contempt the tech sector has for virtually all labor that doesn't involve writing code (although most of the decision-making evangelists in the space aren't even coders, their degrees are in money-making). fuck graphic designers and concept artists and secretaries, those obnoxious demanding cretins i have to PAY MONEY to do-- i mean, do what exactly? write some words on some fucking paper?? draw circles that are letters??? send a god-damned email???? my fucking KID could do that, and these assholes want BENEFITS?! they say they're gonna form a UNION?!?! to hell with that, i'm replacing ALL their ungrateful asses with "AI" ASAP. oh, oh, so you're a "director" who wants to make "movies" and you want ME to pay for it? jump off a bridge you pretentious little shit, my computer can dream up a better flick than you could ever make with just a couple text prompts. what, you think just because you make ~music~ that that entitles you to money from MY pocket? shut the fuck up, you don't make """art""", you're not """an artist""", you make fucking content, you're just a fucking content creator like every other ordinary sap with an iphone. you think you're special? you think you deserve special treatment? who do you think you are anyway, asking ME to pay YOU for this crap that doesn't even create value for my investors? "culture" isn't a playground asshole, it's a marketplace, and it's pay to win. oh you "can't afford rent"? you're "drowning in a sea of medical debt"? you say the "cost" of "living" is "too high"? well ***I*** don't have ANY of those problems, and i worked my ASS OFF to get where i am, so really, it sounds like you're just not trying hard enough. and anyway, i don't think someone as impoverished as you is gonna have much of value to contribute to "culture" anyway. personally, i think it's time you got yourself a real job. maybe someday you'll even make it to middle manager!
see, i don't believe "AI" can qualitatively replace most of the work it's being pitched for. the problem is that quality hasn't mattered to these nincompoops for a long time. the rich homunculi of our world don't even know what quality is, because they exist in a whole separate reality from ours. what could a banana cost, $15? i don't understand what you mean by "burnout", why don't you just take a vacation to your summer home in Madrid? wow, you must be REALLY embarrassed wearing such cheap shoes in public. THESE PEOPLE ARE FUCKING UNHINGED! they have no connection to reality, do not understand how society functions on a material basis, and they have nothing but spite for the labor they rely on to survive. they are so instinctually, incessantly furious at the idea that they're not single-handedly responsible for 100% of their success that they would sooner tear the entire world down than willingly recognize the need for public utilities or labor protections. they want to be Gods and they want to be uncritically adored for it, but they don't want to do a single day's work so they begrudgingly pay contractors to do it because, in the rich man's mind, paying a contractor is literally the same thing as doing the work yourself. now with "AI", they don't even have to do that! hey, isn't it funny that every single successful tech platform relies on volunteer labor and independent contractors paid substantially less than they would have in the equivalent industry 30 years ago, with no avenues toward traditional employment? and they're some of the most profitable companies on earth?? isn't that a funny and hilarious coincidence???
so, yeah, that's my stance on "AI". LLMs have legitimate uses, but those uses are a drop in the ocean compared to what they're actually being used for. they enable our worst impulses while lowering the quality of available information, they give immense power pretty much exclusively to unscrupulous scam artists. they are the product of a society that values only money and doesn't give a fuck where it comes from. they're a temper tantrum by a ruling class that's sick of having to pretend they need a pretext to steal from you. they're taking their toys and going home. all this massive investment and hype is going to crash and burn leaving the internet as we know it a ruined and useless wasteland that'll take decades to repair, but the investors are gonna make out like bandits and won't face a single consequence, because that's what this country is. it is a casino for the kings and queens of economy to bet on and manipulate at their discretion, where the rules are whatever the highest bidder says they are-- and to hell with the rest of us. our blood isn't even good enough to grease the wheels of their machine anymore.
i'm not afraid of AI or "AI" or of losing my job to either. i'm afraid that we've so thoroughly given up our morals to the cruel logic of the profit motive that if a better world were to emerge, we would reject it out of sheer habit. my fear is that these despicable cunts already won the war before we were even born, and the rest of our lives are gonna be spent dodging the press of their designer boots.
(read more "AI" opinions in this subsequent post)
2K notes · View notes
mindblowingscience · 2 days ago
Text
A trio of business analysts at Duke University has found that people who use AI apps at work are perceived by their colleagues as less diligent, lazier and less competent than those who do not use them. In their study, published in Proceedings of the National Academy of Sciences, Jessica Reif, Richard Larrick and Jack Soll ran four online experiments in which 4,400 participants imagined scenarios where some workers used AI and some did not, and reported how they viewed themselves or others working under such circumstances.
918 notes · View notes
10001gecs · 5 months ago
Note
one 100 word email written with ai costs roughly one bottle of water to produce. the discussion of whether or not using ai for work is lazy becomes a non issue when you understand there is no ethical way to use it regardless of your intentions or your personal capabilities for the task at hand
with all due respect, this isnt true. *training* generative ai takes a ton of power, but actually using it takes about as much energy as a google search (with image generation being slightly more expensive). we can talk about resource costs when averaged over the amount of work that any model does, but its unhelpful to put a smokescreen over that fact. when you approach it like an issue of scale (i.e. "training ai is bad for the environment, we should think better about where we deploy it/boycott it/otherwise organize abt this") it has power as a movement. but otherwise it becomes a personal choice, moralizing "you personally are harming the environment by using chatgpt" which is not really effective messaging. and that in turn drives the sort of "you are stupid/evil for using ai" rhetoric that i hate. my point is not whether or not using ai is immoral (i mean, i dont think it is, but beyond that). its that the most common arguments against it from ostensible progressives end up just being reactionary
[image of a quoted passage]
i like this quote a little more- its perfectly fine to have reservations about the current state of gen ai, but its not just going to go away.
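to make the "averaged over the amount of work" point concrete, here's a toy amortization sketch. every number in it is a placeholder assumption, not a measurement; only the shape of the math matters (a huge one-time training cost divided across a huge query volume):

```python
# Back-of-envelope amortization sketch. ALL numbers are illustrative
# placeholders, not measurements: the point is that a one-time training
# cost, spread over lifetime queries, adds little to each query.
TRAINING_ENERGY_KWH = 10_000_000         # assumed one-time training cost
QUERIES_OVER_LIFETIME = 10_000_000_000   # assumed total queries served
PER_QUERY_INFERENCE_KWH = 0.0003         # assumed marginal cost per query

amortized = TRAINING_ENERGY_KWH / QUERIES_OVER_LIFETIME
total_per_query = amortized + PER_QUERY_INFERENCE_KWH
print(f"amortized training: {amortized:.6f} kWh/query")
print(f"total per query:    {total_per_query:.6f} kWh/query")
```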
1K notes · View notes
thoughtportal · 6 months ago
Text
1K notes · View notes